
    Efficient sampling of non log-concave posterior distributions with mixture of noises

    This paper focuses on a challenging class of inverse problems often encountered in applications. The forward model is a complex non-linear black box, potentially non-injective, whose outputs span several decades in amplitude; whenever it is smooth, its gradient admits a Lipschitz constant too large to be exploited in the inference process. Observations are assumed to be simultaneously degraded by additive noise, multiplicative noise, and censorship. As needed in many applications, the aim of this work is to provide uncertainty quantification on top of parameter estimates. The resulting log-likelihood is intractable and potentially non-log-concave. An adapted Bayesian approach is proposed to provide credibility intervals along with point estimates. An MCMC algorithm is proposed to deal with the multimodal posterior distribution, even when there is no global Lipschitz constant (or it is very large). It combines two kernels, namely an improved version of the Preconditioned Metropolis-Adjusted Langevin Algorithm (PMALA) and a Multiple-Try Metropolis (MTM) kernel. This sampler addresses all the challenges induced by the complex form of the likelihood. The proposed method is illustrated on classical multimodal test distributions as well as on a challenging and realistic inverse problem in astronomy.
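The two-kernel idea in this abstract can be sketched on a toy bimodal target: a gradient-based Langevin kernel handles local exploration while a Multiple-Try Metropolis kernel with a wide proposal handles jumps between modes. This is a minimal illustration on a 1-D Gaussian mixture, not the paper's PMALA variant; the step sizes, target, and number of tries are assumptions.

```python
import numpy as np

# Unnormalized bimodal target: equal-weight Gaussians at -2 and +2 (sd 0.5).
S2 = 0.25
def log_pi(x):
    a = -((x + 2.0) ** 2) / (2 * S2)
    b = -((x - 2.0) ** 2) / (2 * S2)
    m = max(a, b)
    return m + np.log(np.exp(a - m) + np.exp(b - m))

def grad_log_pi(x):
    wa = np.exp(-((x + 2.0) ** 2) / (2 * S2))
    wb = np.exp(-((x - 2.0) ** 2) / (2 * S2))
    return (wa * (-(x + 2.0) / S2) + wb * (-(x - 2.0) / S2)) / (wa + wb)

def mala_step(x, h, rng):
    # Langevin proposal plus Metropolis accept/reject correction.
    mu_x = x + 0.5 * h * grad_log_pi(x)
    y = mu_x + np.sqrt(h) * rng.standard_normal()
    mu_y = y + 0.5 * h * grad_log_pi(y)
    log_q_xy = -((x - mu_y) ** 2) / (2 * h)   # q(x | y)
    log_q_yx = -((y - mu_x) ** 2) / (2 * h)   # q(y | x)
    log_a = log_pi(y) - log_pi(x) + log_q_xy - log_q_yx
    return y if np.log(rng.random()) < log_a else x

def mtm_step(x, sigma, K, rng):
    # Multiple-Try Metropolis with a symmetric proposal and weights w = pi.
    ys = x + sigma * rng.standard_normal(K)
    wy = np.array([log_pi(y) for y in ys])
    p = np.exp(wy - wy.max()); p /= p.sum()
    y = ys[rng.choice(K, p=p)]
    # Reference points drawn around the selected candidate (the last one is x).
    xs = np.append(y + sigma * rng.standard_normal(K - 1), x)
    wx = np.array([log_pi(z) for z in xs])
    m = max(wy.max(), wx.max())
    log_a = np.log(np.exp(wy - m).sum()) - np.log(np.exp(wx - m).sum())
    return y if np.log(rng.random()) < log_a else x

rng = np.random.default_rng(0)
x, chain = -2.0, []
for t in range(4000):
    x = mala_step(x, h=0.05, rng=rng)         # local exploration
    x = mtm_step(x, sigma=4.0, K=5, rng=rng)  # mode jumping
    chain.append(x)
chain = np.asarray(chain)
```

With MALA alone the chain would tend to stay trapped near its starting mode; the wide MTM proposal lets it visit both modes.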

    Neural network-based emulation of interstellar medium models

    The interpretation of observations of atomic and molecular tracers in the galactic and extragalactic interstellar medium (ISM) requires comparisons with state-of-the-art astrophysical models to infer some physical conditions. Usually, ISM models are too time-consuming for such inference procedures, as they call for numerous model evaluations. As a result, they are often replaced by an interpolation of a grid of precomputed models. We propose a new general method to derive faster, lighter, and more accurate approximations of the model from a grid of precomputed models. These emulators are defined with artificial neural networks (ANNs) designed and trained to address the specificities inherent in ISM models. Indeed, such models often predict many observables (e.g., line intensities) from just a few input physical parameters and can yield outliers due to numerical instabilities or physical bistabilities. We propose applying five strategies to address these characteristics: 1) an outlier removal procedure; 2) a clustering method that yields homogeneous subsets of lines that are simpler to predict with different ANNs; 3) a dimension reduction technique that makes it possible to size the network architecture adequately; 4) an augmentation of the physical inputs with a polynomial transform to ease the learning of nonlinearities; and 5) a dense architecture to ease the learning of simple relations. We compare the proposed ANNs with standard classes of interpolation methods to emulate the Meudon PDR code, a representative ISM numerical model. Combinations of the proposed strategies outperform all interpolation methods by a factor of 2 on the average error, reaching 4.5% on the Meudon PDR code. These networks are also 1000 times faster than accurate interpolation methods and require ten to forty times less memory. This work will enable efficient inferences on wide-field multiline observations of the ISM.
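Strategy 4 above (augmenting log-scaled physical inputs with a polynomial transform so nonlinearities become easier to learn) can be illustrated on a toy stand-in for an ISM model. The function `toy_model` below is an invented smooth power-law-like relation, not the Meudon PDR code; a plain linear least-squares fit on the augmented features stands in for the network's first layers.

```python
import numpy as np

# Toy stand-in for an ISM model: log-intensity as a smooth nonlinear
# function of log pressure and log UV field (NOT the Meudon PDR code).
def toy_model(logP, logG0):
    return 1.3 * logP + 0.5 * logG0 + 0.2 * logP * logG0 - 0.1 * logG0 ** 2

def poly_features(logP, logG0, degree=3):
    # Strategy 4: augment the (log-scaled) inputs with polynomial terms
    # so that a simple (here linear) learner can capture nonlinearities.
    feats = [np.ones_like(logP)]
    for d in range(1, degree + 1):
        for i in range(d + 1):
            feats.append(logP ** (d - i) * logG0 ** i)
    return np.stack(feats, axis=1)

rng = np.random.default_rng(1)
logP = rng.uniform(5, 9, 500)    # e.g. log10 of thermal pressure
logG0 = rng.uniform(0, 4, 500)   # e.g. log10 of UV field strength
X = poly_features(logP, logG0)
y = toy_model(logP, logG0)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Evaluate on a held-out grid: the augmented features span the toy
# relation, so the fit recovers it essentially exactly.
gP, gG = np.meshgrid(np.linspace(5, 9, 20), np.linspace(0, 4, 20))
pred = poly_features(gP.ravel(), gG.ravel()) @ coef
err = np.abs(pred - toy_model(gP.ravel(), gG.ravel())).max()
```

The same transform feeds the ANN in the paper's setting; the point here is only that polynomial input augmentation moves nonlinear structure into the feature space.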

    Deep learning denoising by dimension reduction: Application to the ORION-B line cubes

    Context. The availability of large bandwidth receivers for millimeter radio telescopes allows the acquisition of position-position-frequency data cubes over a wide field of view and a broad frequency coverage. These cubes contain much information on the physical, chemical, and kinematical properties of the emitting gas. However, their large size coupled with an inhomogeneous signal-to-noise ratio (SNR) are major challenges for consistent analysis and interpretation. Aims. We search for a denoising method for the low SNR regions of the studied data cubes that would allow us to recover the low SNR emission without distorting the signals with high SNR. Methods. We perform an in-depth data analysis of the 13CO and C17O (1-0) data cubes obtained as part of the ORION-B large program performed at the IRAM 30m telescope. We analyse the statistical properties of the noise and the evolution of the correlation of the signal in a given frequency channel with that of the adjacent channels. This allows us to propose significant improvements of typical autoassociative neural networks, often used to denoise hyperspectral Earth remote sensing data. Applying this method to the 13CO (1-0) cube, we compare the denoised data with those derived with the multiple Gaussian fitting algorithm ROHSA, considered the state-of-the-art procedure for line data cubes. Results. The nature of astronomical spectral data cubes is distinct from that of the hyperspectral data usually studied in the Earth remote sensing literature because the observed intensities become statistically independent beyond a short channel separation. This lack of redundancy in the data has led us to adapt the method, notably by taking into account the sparsity of the signal along the spectral axis. The application of the proposed algorithm leads to an increase of the SNR in voxels with weak signal, while preserving the spectral shape of the data in high SNR voxels. Conclusions. The proposed algorithm, which combines a detailed analysis of the noise statistics with an innovative autoencoder architecture, is a promising path to denoise radio-astronomy line data cubes. In the future, exploring whether a better use of the spatial correlations of the noise may further improve the denoising performance seems a promising avenue.
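The core principle, denoising by dimension reduction, can be sketched with a linear stand-in for the paper's autoencoder: project noisy spectra onto their leading principal components and reconstruct. The synthetic "cube slice" below (Gaussian line profiles with varying amplitude) is an assumption chosen so the clean signal has low intrinsic dimension; real line cubes and the actual network are far richer.

```python
import numpy as np

rng = np.random.default_rng(2)
n_spec, n_chan = 300, 64
v = np.linspace(-5, 5, n_chan)

# Synthetic line-cube slice: each spectrum is a Gaussian line whose
# amplitude varies from pixel to pixel (low intrinsic dimension).
amps = rng.uniform(0.5, 3.0, n_spec)
clean = amps[:, None] * np.exp(-v[None, :] ** 2 / 2.0)
noisy = clean + 0.5 * rng.standard_normal((n_spec, n_chan))

# Dimension-reduction denoising: keep only the leading singular
# components (a linear, PCA-style stand-in for the autoencoder).
U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
k = 2
denoised = U[:, :k] * s[:k] @ Vt[:k]

# Root-mean-square error against the known clean signal.
err_noisy = np.sqrt(((noisy - clean) ** 2).mean())
err_denoised = np.sqrt(((denoised - clean) ** 2).mean())
```

Because the noise spreads over all components while the signal concentrates in a few, truncation suppresses noise far more than signal; the paper's autoencoder extends this idea nonlinearly and accounts for spectral sparsity.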

    Gas kinematics around filamentary structures in the Orion B cloud

    Context. Understanding the initial properties of star-forming material and how they affect the star formation process is a key question. From an observational point of view, the feedback from young high-mass stars on future star formation properties is still poorly constrained. Aims. In the framework of the IRAM 30m ORION-B large program, we obtained observations of the translucent (2 ≀ AV < 6 mag) and moderately dense gas (6 ≀ AV < 15 mag), which we used to analyze the kinematics over a field of 5 deg2 around the filamentary structures. Methods. We used the Regularized Optimization for Hyper-Spectral Analysis (ROHSA) algorithm to decompose and denoise the C18O(1−0) and 13CO(1−0) signals by taking the spatial coherence of the emission into account. We produced gas column density and mean velocity maps to estimate the relative orientation of their spatial gradients. Results. We identified three cloud velocity layers at different systemic velocities and extracted the filaments in each velocity layer. The filaments are preferentially located in regions of low centroid velocity gradients. By comparing the relative orientation between the column density and velocity gradients of each layer from the ORION-B observations and synthetic observations from 3D kinematic toy models, we distinguish two types of behavior in the dynamics around filaments: (i) radial flows perpendicular to the filament axis, which can be either inflows (increasing the filament mass) or outflows, and (ii) longitudinal flows along the filament axis. The former case is seen in the Orion B data, while the latter is not identified. We have also identified asymmetrical flow patterns, usually associated with filaments located at the edge of an H II region. Conclusions. This is the first observational study to highlight feedback from H II regions on filament formation and, thus, on star formation in the Orion B cloud. This simple statistical method can be used for any molecular cloud to obtain coherent information on the kinematics.
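The relative-orientation statistic used in this study can be sketched on synthetic maps: compute the spatial gradients of a column density map and a centroid velocity map, then the angle between them from their cross and dot products. The toy fields below are assumptions built so the two gradients are exactly parallel (the pure "radial flow" case); on real maps the distribution of angles is what distinguishes flow types.

```python
import numpy as np

# Synthetic maps on a grid: a centrally condensed column density N and a
# centroid-velocity field v. Here v is built from N so that its gradient
# is parallel to the density gradient (a pure "radial flow" toy case).
y, x = np.mgrid[-3:3:128j, -3:3:128j]
N = np.exp(-(x ** 2 + y ** 2) / 2.0)
v = 0.5 * N  # assumption: velocity tracks density -> aligned gradients

gNy, gNx = np.gradient(N)
gvy, gvx = np.gradient(v)

# Relative orientation angle phi between the two gradient fields,
# from the cross and dot products of the gradient vectors.
cross = gNx * gvy - gNy * gvx
dot = gNx * gvx + gNy * gvy
phi = np.arctan2(cross, dot)

# Ignore pixels where either gradient is too weak to define a direction.
mag = np.hypot(gNx, gNy) * np.hypot(gvx, gvy)
phi_valid = phi[mag > 1e-6]
```

For this aligned toy case all valid angles are near zero; perpendicular flows would pile angles near ±90 degrees instead.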

    Sampling methods for the statistical inference of nonlinear inverse problems: spatial distribution of the physico-chemical properties of the interstellar medium

    The interstellar medium (ISM) is a very diffuse medium that fills the extraordinarily large volume between celestial objects such as stars and black holes in a galaxy. The study of the ISM raises fundamental questions, including that of star formation. Stars are born from the gravitational collapse of parts of cold, dense regions of the ISM called molecular clouds. This thesis analyzes multispectral maps of molecular clouds in the infrared and radio domains, observed by space or ground telescopes. The focus is on clouds that are illuminated and heated by nearby massive stars emitting UV photons. The surface layer of such clouds, where the UV irradiation heats and dissociates the molecular gas, is called a photodissociation region (PDR). Their multispectral maps typically contain from 1 to 10 000 pixels, where each pixel contains the integrated intensities of 5 to 30 emission lines. These intensities can be compared with the predictions of an ISM numerical model such as the Meudon PDR code, which computes intensities from physical parameters. This thesis aims at estimating maps of physical parameters (such as the thermal pressure or the intensity of the incident UV field) from an observation map and the Meudon PDR code. This problem is an instance of a general class of inverse problems. A new inference method that accounts for as many uncertainty sources as possible is introduced. A general procedure to derive a surrogate approximation of numerical models is proposed. It is based on a specific neural network and outperforms interpolation methods in accuracy, memory footprint, and evaluation time. A spatial regularization improves estimations. A sampling approach is considered to provide uncertainty quantification along with the estimated physical parameter maps, to address the absence of ground truth inherent in astrophysics.
The proposed Markov chain Monte Carlo (MCMC) algorithm combines two samplers: one identifies local minima in the parameter space while the second efficiently explores them. Finally, the relevance of the observation model considered for inference is assessed. The proposed method is applied to synthetic data for validation and then to real observations. The results are analyzed for astrophysical interpretation.

    Tertiu[m] scriptu[m] super tertium sententiarum

    Fol. i-ccxlii. Adams P-123. Imprint taken from the colophon; on the title page: "Venundant Parisius a Claudio Chevallon". Printer's device on the title page. Signatures: aa8, a-z8, [et]8, [con]8, [rum]8, A6, B-C8, D-E6. Gothic type. Engraved initials. Text in two columns. Marginal notes. Title page with an architectural engraving.
    • 
